ECNP e-news
Machine learning and the future of predictive psychiatry
Tim Hahn

Tim Hahn is Group Leader of AI Development at the Translational Psychiatry Lab of the University of Münster, Germany. He is working to build predictive models to support differential diagnosis, and to determine individual patient risk and treatment response[1]. Addressing these central issues of personalised medicine in psychiatry has been made possible by the continuing evolution of big data and machine learning methods – a topic that he will be discussing during his symposium at the 31st ECNP Congress.

The need for predictive analytics stems from two central issues: the legacy of disorder categorisation, and that of statistical paradigms. Traditionally, psychiatry has focused on uncovering mechanisms involved in particular psychiatric disorders, from which new targets are investigated, culminating in clinical trials. This process of scientific investigation hinges upon classical disorder definitions (the latest incarnations of which lie in ICD-11 and DSM-5), and on differences between groups of people with a particular disorder and healthy controls. The now familiar challenges of psychiatry research – the many risk factors, both genetic and environmental in origin, that combine in different ways to give rise to a particular disorder or disorders – present a complexity that exceeds the capacity of traditional analytical approaches[2].

“The biggest challenge is in bringing the scientific advances of the last two to three decades to the patient,” Dr Hahn told ECNP. “The thing that we are doing in psychiatry currently is that we are trying to make statistical inferences based on groups. But if I find a difference between two groups, it doesn’t necessarily help me to figure out what is wrong with the next person that steps into my office.”

“A second issue is that, in psychiatry, everything relies on the phenotype. So it might actually be that the people we are calling bipolar patients or depressed patients are very likely a heterogeneous group of people with very different underlying neurophysiological make-ups. That is a fundamental problem, because if you were to try to make a test for a diagnosis, you would have to split up these groups first and test for each one of the subgroups. The problem is that in psychiatry we are only beginning to understand what these subgroups might look like.”

Machine learning has the potential to confront these two difficulties head-on, as it can be applied to detect patterns in vast swathes of complex data without the bias of preconceptions about familiar diagnostic categories. As such, it can be viewed simply as an advanced statistical approach suited to the demands of today’s data[3]. Such approaches leverage the advent of big data, as well as advances in computing power, both to cluster data and to make predictions. This data-driven attitude also reflects emerging contemporary ideas such as cross-diagnostic dimensions of psychiatric illness[4,5].
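
To make the data-driven idea concrete, a minimal sketch of the clustering side – grouping patients by their measured features alone, with diagnostic labels set aside – might look like the following. The data, feature dimensions and number of clusters are synthetic placeholders for illustration, not drawn from any study discussed in this article.

# Illustration only: search for data-driven patient subgroups without using
# diagnostic labels. The feature matrix is a synthetic stand-in.
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 20))                # hypothetical: 300 patients x 20 measures

X_scaled = StandardScaler().fit_transform(X)  # put all measures on a common scale
subgroups = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X_scaled)

print(np.bincount(subgroups))                 # size of each putative, label-free subgroup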

These methods have been developing over the past decade, explained Dr Hahn. “When we started out in 2008 and 2009, the sample sizes were comparatively tiny. Back then, getting an MRI study sample of 60 people was an effort. Luckily, the necessary big data infrastructure has been established over the last ten years – first in genetics, where large-scale consortia have been well established for at least a decade, but also in neuroimaging, where large-scale studies have emerged in the last ten years. Now, slowly, the benefits of these efforts are trickling down, because we can now do machine learning for structural and functional MRI data in 10,000 or 20,000 people, which makes it a lot easier.”

Deep learning
During the machine learning symposium at the ECNP Congress this year, Dr Hahn will discuss deep learning, which is set apart by its ability to perform sequential non-linear abstractions of raw data. In this way, functions representing increasingly complex or abstract features can be ‘learned’ by building upon lower-level concepts. Crucially, this can be done in a general way, driven by generic pattern-recognition procedures rather than domain-specific rules[10].

In contrast, classical machine learning relies on the input of domain knowledge by human experts, explained Dr Hahn: “These experts sit down with the bioinformatics people and try to figure out how to generate, from a huge heap of data, a few specific variables that are very informative, with which you can do your regression or classification task. That step is called ‘feature engineering’: from, say, 2 million genetic SNPs, or 300,000 voxels in an MRI, you build anywhere between five and 5,000 very informative variables that you can put into a machine learning model. This feature engineering is very time consuming. That is the first problem. The second problem is that you need to repeat it for each problem and each dataset. This is incredibly difficult and takes a long time.

“Deep learning tries to cut the human out of that loop. So the endless discussion of whether we should, for example, look at the volume of the hippocampus or the thickness of the frontal cortex, is left to the machine. The major difference is that you put the raw data into the deep learning machinery. Then it transforms the data in a way that, down the line – down the stack of neural network layers – it will figure out the most informative, low number of variables itself. Then it will do the very same thing: the classification, the regression, on these variables.”
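
The contrast Dr Hahn describes can be sketched in a few lines of Python. Everything below is synthetic and purely illustrative: the ‘engineered’ and ‘raw’ matrices stand in for expert-built summary measures and for voxel- or SNP-level input, and the small multi-layer network stands in for the far deeper architectures used in practice.

# (a) Classical approach: a handful of hand-engineered features feed a simple model.
# (b) Deep(er) approach: the model receives high-dimensional 'raw' input and must
#     learn informative intermediate representations itself.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(42)
n = 400
y = rng.integers(0, 2, size=n)                      # e.g. patient vs control labels

# (a) Feature engineering: a few expert-chosen summary variables
#     (think hippocampal volume, cortical thickness, ...).
engineered = rng.normal(size=(n, 5)) + y[:, None] * 0.5
print(cross_val_score(LogisticRegression(max_iter=1000), engineered, y, cv=5).mean())

# (b) High-dimensional 'raw' input (stand-in for voxels or SNPs), with weak signal
#     spread over many variables; the network builds its own features layer by layer.
raw = rng.normal(size=(n, 2000))
raw[:, :50] += y[:, None] * 0.3
deep = MLPClassifier(hidden_layer_sizes=(128, 32), max_iter=500, random_state=0)
print(cross_val_score(deep, raw, y, cv=5).mean())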

One of the first and now best-known examples of deep learning in practice is object recognition. “For thirty years, computer vision researchers tried to figure out ways to sum up pixels and variables in terms of angle, shade, etc. It never quite worked. So people looked into the human visual cortex, where you can actually see that the pixels from the retina are processed in a very specific way, starting with Gabor filters – where we figure out the angles, the shades, the colour, and then add it all up again later. So they tried to train specific deep learning networks that would mimic this process. The most fascinating thing was that the neural network, which didn’t know anything about the visual cortex, learned highly similar steps – our visual cortex seems to be a rather optimal way of processing natural things.

“We are trying to do a very similar thing in trying to figure out the basic building blocks of genetic information or neural information, that we can use to build up the entire image – and learning them, rather than a human trying to figure them out using hard science which is so insanely difficult. We are just trying a different approach. These problems might just be too difficult for us to reverse engineer, so let the machine reverse engineer the brain.”

Dealing with complex data
Asked whether he encounters any scepticism from the psychiatry community, particularly in light of the tension between biological psychiatry and the phenomenology of psychiatric disorder, Dr Hahn responded: “I think that there is a lot of opposition in that way. But it is crumbling, due to the fact that it is rooted in a misunderstanding.”

Machine learning methods have recently been discussed in areas of healthcare such as drug discovery[6], in clinical psychology and psychiatry[7], in genomics[8] and neuroimaging[9]. But Dr Hahn was keen to stress that biomarkers and big data need not – and should not – be restricted to imaging and omics.

“As a machine-learning person in psychiatry, I personally don’t care whether it is neuroimaging data or genetic data or psychometric data or a video of an interview with a patient done by a psychiatrist. This is all in a sense complex data. What we are trying to do is simply to label states of a complex dynamic physical system. If that system is the brain, or the CNS, or the person, or the social group, or the communication between a bunch of people – this, from my mathematical point of view, is not really relevant. Where we are starting is genetic information and MRI information, simply because these two fields have been the first to organise large data consortia to do this with, so this really is the best place to start from a data science perspective.

“I think this criticism is true if there is a neurocentric view or a biocentric view on psychiatry. But AI is absolutely not committed to any of these branches. Its very strength lies in the fact that it can take data of different origins – and even combine them. The biggest breakthroughs in the next couple of years are going to come from combining data in this way.

“Actually, we are fostering a project like this here in Germany to use smartphones to track people every day for two years on how much they move, how much they use their phones, how many steps they take… to really get a source of information that we believe is a lot closer to the experience of a person. If we could monitor the daily lives of people, it would make a lot more sense to use this data, because everything in psychiatry is based on the phenotype – on watching people, making observations, asking them questions.”

Another potential criticism is common to any emerging technology: namely, how do we handle its ethical implications? Should we be worried that we no longer know what is inside the black box? While this lies outside the scope of the symposium, Dr Hahn was keen to highlight the importance of exploring these issues. Opening the black box, he said, is a misdirection away from the central issues of AI.

“We already live in a world with many black boxes, particularly in medicine,” he said, citing an example: “If you ask a GP whether he can tell you how the lab results are obtained, he probably won’t know. If you bring in the lab technician, who knows every single step of the deterministic way in which the lab results have been obtained, and he explains every little step of this process to the patient, the patient would technically know everything about that process. But in fact, the patient would be none the wiser. The main concern about AI is that it can’t be understood. That is absolutely true. But that is true for many aspects of life.”

Dr Hahn concluded by discussing the future of the field. “First, we need to finally start gathering real-world data at large scale. In contrast to today's studies, which usually have rather strong exclusion criteria, this would mean acquiring data from everyone or at least representative samples of patients. While homogeneous, well-curated samples are important to gain scientific insight, acquiring real-life samples at scale is the only way to ensure that machine learning models will generalise to new data. Unfortunately, even large consortia still sometimes restrict their data acquisition to subgroups of patients.”

“Second, while the advances of the last ten years in machine learning in psychiatry are more than encouraging, replication and independent model evaluation have been neglected. Thus, we see that many results of the past decade cannot be replicated. To ensure our models are robust and replicable, we have to start training and testing them across many data sets and institutions. To this end, we have recently created a common format for saving machine learning model pipelines, as well as an online repository (www.photon-ai.com/repo) where researchers can upload their models to have them tested or trained further by other researchers. I believe that only through rigorous replication and independent testing can we gain the confidence we need to use machine learning models in the clinic.”
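
The idea behind such a repository – freezing a complete preprocessing-plus-model pipeline so that other groups can evaluate it on their own, unseen data – can be sketched generically as follows, using plain scikit-learn and joblib on synthetic data purely for illustration; this is not the actual format used by the PHOTON repository.

# Site A trains and saves a complete pipeline; site B loads the frozen pipeline
# and evaluates it on independent data. All data here are synthetic.
import joblib
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.metrics import balanced_accuracy_score

rng = np.random.default_rng(1)

# --- Site A: training ---
X_train, y_train = rng.normal(size=(200, 50)), rng.integers(0, 2, size=200)
pipeline = make_pipeline(StandardScaler(), SVC(kernel="linear"))
pipeline.fit(X_train, y_train)
joblib.dump(pipeline, "trained_pipeline.joblib")    # the shareable artefact

# --- Site B: independent evaluation ---
X_external, y_external = rng.normal(size=(100, 50)), rng.integers(0, 2, size=100)
frozen = joblib.load("trained_pipeline.joblib")
print(balanced_accuracy_score(y_external, frozen.predict(X_external)))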

“Third, we need to devise a way to certify AI models for clinical use. While the focus of the general public and of many politicians seems to be on ‘knowing the inner workings of the algorithm’, this would not be at all helpful. What we really need is AI transparency, i.e. an in-depth risk analysis of any AI model and of the data it was trained on. It does not help to know the millions of mathematical operations performed by an algorithm, but it is absolutely mandatory that we know exactly how an algorithm performs in different age groups or genders. Directing the discussion away from algorithm disclosure and towards AI transparency is a political process we need to start before the first models enter the clinic. Unfortunately, in the field of medicine, little is happening in that regard as far as I can see.”
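
What such a subgroup-level risk analysis could look like in practice can be illustrated with a short, hedged example: the same model’s performance is reported separately for each age band and gender, rather than any attempt being made to inspect its internal operations. The predictions and age bands below are synthetic and chosen only for demonstration.

# Report model performance per subgroup instead of inspecting the algorithm itself.
# Data, predictions and age bands are synthetic placeholders.
import numpy as np
import pandas as pd
from sklearn.metrics import balanced_accuracy_score

rng = np.random.default_rng(7)
n = 1000
df = pd.DataFrame({
    "age": rng.integers(18, 80, n),
    "gender": rng.choice(["female", "male"], n),
    "y_true": rng.integers(0, 2, n),
})
# Stand-in predictions: correct about 80% of the time.
df["y_pred"] = np.where(rng.random(n) < 0.8, df["y_true"], 1 - df["y_true"])
df["age_band"] = pd.cut(df["age"], bins=[17, 35, 55, 80], labels=["18-35", "36-55", "56-80"])

for (band, gender), grp in df.groupby(["age_band", "gender"], observed=True):
    score = balanced_accuracy_score(grp["y_true"], grp["y_pred"])
    print(f"{band:>6} {gender:>6}  balanced accuracy = {score:.2f}")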

Symposium S.18, ‘Big data and machine learning in psychiatry: technological breakthroughs for the clinic of the future’ takes place on Monday 8 October between 14.45 and 16.25 at the 31st ECNP Congress in Barcelona.

References
1. Hahn T, Nierenberg AA, Whitfield-Gabrieli S. Predictive analytics in mental health: applications, guidelines, challenges and perspectives. Mol Psychiatry. 2017 Jan;22(1):37-43.
2. Bzdok D, Meyer-Lindenberg A. Machine learning for precision psychiatry. Biol Psychiatry Cogn Neurosci Neuroimaging. 2018 Mar;3(3):223-230.
3. Iniesta R, Stahl D, McGuffin P. Machine learning, statistical learning and the future of biological research in psychiatry. Psychol Med. 2016 Sep;46(12):2455-65.
4. Cuthbert BN, Insel TR. Toward the future of psychiatric diagnosis: the seven pillars of RDoC. BMC Med. 2013 May 14;11:126.
5. Heinz A, Schlagenhauf F, Beck A et al. Dimensional psychiatry: mental disorders as dysfunctions of basic learning mechanisms. J Neural Transm (Vienna). 2016 Aug;123(8):809-21.
6. Lima AN, Philot EA, Trossini GH et al. Use of machine learning approaches for novel drug discovery. Expert Opin Drug Discov. 2016;11(3):225-39.
7. Dwyer DB, Falkai P, Koutsouleris N. Machine Learning Approaches for Clinical Psychology and Psychiatry. Annu Rev Clin Psychol. 2018 May 7;14:91-118.
8. Lin E, Lane HY. Machine learning and systems genomics approaches for multi-omics data. Biomark Res. 2017 Jan 20;5:2.
9. Arbabshirani MR, Plis S, Sui J et al. Single subject prediction of brain disorders in neuroimaging: Promises and pitfalls. Neuroimage. 2017 Jan 15;145(Pt B):137-165.
10. LeCun Y, Bengio Y, Hinton G. Deep learning. Nature. 2015 May 28;521(7553):436-44.